Goto



On the Role of Randomization in Adversarially Robust Classification

Neural Information Processing Systems

Deep neural networks are known to be vulnerable to small adversarial perturbations in test data. To defend against adversarial attacks, probabilistic classifiers have been proposed as an alternative to deterministic ones. However, the literature has conflicting findings on the effectiveness of probabilistic classifiers in comparison to deterministic ones. In this paper, we clarify the role of randomization in building adversarially robust classifiers. Given a base hypothesis set of deterministic classifiers, we show the conditions under which a randomized ensemble outperforms the hypothesis set in adversarial risk, extending previous results. Additionally, we show that for any probabilistic binary classifier (including randomized ensembles), there exists a deterministic classifier that outperforms it. Finally, we give an explicit description of the deterministic hypothesis set that contains such a deterministic classifier for many types of commonly used probabilistic classifiers, i.e. randomized ensembles and parametric/input noise injection.
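The central comparison in this abstract can be made concrete with a small sketch (not from the paper; all classifiers, data, and numbers below are illustrative choices): a randomized ensemble, i.e. a distribution over deterministic classifiers, can achieve strictly lower adversarial risk than every individual member of its base hypothesis set, because the attacker must commit to a perturbation before the classifier is sampled.

```python
# Toy sketch, assuming a 1-D input, a single data point, and an L-infinity
# perturbation budget eps, attacked by brute force over a grid.
x, y, eps = 0.0, 1, 1.0
grid = [x - eps + 2 * eps * i / 2000 for i in range(2001)]  # candidate perturbed inputs

# Two deterministic classifiers, each wrong on a different part of the eps-ball.
def h_a(z):
    return -1 if -1.0 <= z <= -0.5 else +1  # predicts -1 only on [-1, -0.5]

def h_b(z):
    return -1 if 0.5 <= z <= 1.0 else +1   # predicts -1 only on [0.5, 1]

def adv_risk_deterministic(h):
    # Adversarial risk of a deterministic classifier: worst-case 0-1 loss
    # over all admissible perturbations of the input.
    return max(float(h(z) != y) for z in grid)

def adv_risk_mixture(hs, weights):
    # Adversarial risk of a randomized ensemble: the attacker commits to a
    # perturbation z first, then a classifier is sampled from the ensemble,
    # so the relevant quantity is the worst-case *expected* 0-1 loss.
    return max(sum(w * float(h(z) != y) for h, w in zip(hs, weights))
               for z in grid)

print(adv_risk_deterministic(h_a))               # 1.0: h_a can always be fooled
print(adv_risk_deterministic(h_b))               # 1.0: h_b can always be fooled
print(adv_risk_mixture([h_a, h_b], [0.5, 0.5]))  # 0.5: no single z fools both
```

Because the two classifiers' error regions are disjoint, no single perturbation fools both, so the uniform mixture halves the worst-case expected loss relative to either member. This illustrates the "randomized ensemble outperforms the hypothesis set" direction; the paper's second result (a deterministic classifier that outperforms any probabilistic one) requires enlarging the hypothesis set beyond the two members used here.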


A Finer Calibration Analysis for Adversarial Robustness

Awasthi, Pranjal, Mao, Anqi, Mohri, Mehryar, Zhong, Yutao

arXiv.org Machine Learning

We present a more general analysis of H-calibration for adversarially robust classification. By adopting a finer definition of calibration, we can cover settings beyond the restricted hypothesis sets studied in previous work. In particular, our results hold for most common hypothesis sets used in machine learning. We both fix some previous calibration results (Bao et al., 2020) and generalize others (Awasthi et al., 2021). Moreover, our calibration results, combined with the previous study of consistency by Awasthi et al. (2021), also lead to more general H-consistency results covering common hypothesis sets.
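For context, one common formulation of calibration relative to a hypothesis set H, paraphrased from the surrogate-loss literature (the paper's own, finer definition may differ in its exact quantifiers), is:

```latex
% H-calibration of a surrogate loss \ell with respect to a target loss \ell_0
% (e.g. the adversarial 0-1 loss), stated via conditional risks. This is a
% paraphrase of a standard formulation, not necessarily the paper's definition.
\[
\forall \epsilon > 0,\ \exists \delta > 0:\quad
\mathcal{C}_{\ell}(h, x) < \inf_{h' \in \mathcal{H}} \mathcal{C}_{\ell}(h', x) + \delta
\;\Longrightarrow\;
\mathcal{C}_{\ell_0}(h, x) < \inf_{h' \in \mathcal{H}} \mathcal{C}_{\ell_0}(h', x) + \epsilon,
\]
where $\mathcal{C}_{\ell}(h, x) = \mathbb{E}_{y \mid x}\!\left[\ell(h, x, y)\right]$
denotes the conditional $\ell$-risk at $x$.
```

Informally: any classifier that nearly minimizes the surrogate conditional risk over H must also nearly minimize the target conditional risk over H. Restricting the infima to H, rather than to all measurable functions, is what distinguishes H-calibration from classical calibration.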